
Develop #624

Merged
MervinPraison merged 3 commits into main from develop on Jun 7, 2025

Conversation

@MervinPraison (Owner) commented Jun 7, 2025

PR Type

Enhancement, Bug fix, Documentation, Tests


Description

  • Introduces a new, privacy-focused, opt-out telemetry system with minimal anonymous metrics collection and PostHog integration.

  • Adds auto-instrumentation for Agent and PraisonAIAgents classes, ensuring telemetry is enabled by default and can be disabled via environment variables or API.

  • Disables litellm telemetry globally to prevent unwanted data collection.

  • Updates dependencies: adds posthog, bumps litellm and PraisonAI versions, and introduces a telemetry optional dependency group.

  • Provides extensive documentation, including a telemetry module README, technical analysis of telemetry integration issues, and a summary of telemetry implementation and usage.

  • Adds comprehensive test and debug scripts to verify telemetry instrumentation, PostHog integration, disabling mechanisms, and import order.

  • Updates Dockerfiles and Homebrew formula to use PraisonAI version 2.2.31.
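The opt-out behavior summarized above can be sketched as a simple environment check. The variable names below are the ones documented elsewhere in this PR (`PRAISONAI_TELEMETRY_DISABLED`, `PRAISONAI_DISABLE_TELEMETRY`, `DO_NOT_TRACK`); the helper function itself is illustrative, not the package's actual API:

```python
import os

# Environment variables the telemetry module honors for opting out,
# per the PR's telemetry documentation. The helper itself is a sketch.
_OPT_OUT_VARS = (
    "PRAISONAI_TELEMETRY_DISABLED",
    "PRAISONAI_DISABLE_TELEMETRY",
    "DO_NOT_TRACK",
)

def telemetry_disabled() -> bool:
    """Return True if any documented opt-out variable is set to a truthy value."""
    return any(
        os.environ.get(var, "").lower() in ("1", "true", "yes")
        for var in _OPT_OUT_VARS
    )
```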


Changes walkthrough 📝

Relevant files
Configuration changes
6 files
llm.py
Disable litellm telemetry in LLM class and initialization

src/praisonai-agents/praisonaiagents/llm/llm.py

  • Disables litellm telemetry by setting the environment variable
    LITELLM_TELEMETRY to "False" before any imports.
  • Explicitly sets litellm.telemetry = False after importing litellm in
    the LLM class constructor.
  • +6/-0     
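The pattern this entry describes can be sketched as follows, assuming (as the PR does) that litellm reads `LITELLM_TELEMETRY` at import time, which is why the variable must be set before the import; the attribute assignment afterward is a belt-and-braces safeguard:

```python
import os

# Set the flag before litellm is imported so it is seen at import time.
os.environ["LITELLM_TELEMETRY"] = "False"

try:
    import litellm
    # Belt and braces: also disable the attribute after import.
    litellm.telemetry = False
except ImportError:
    litellm = None  # litellm not installed; nothing to disable
```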
    __init__.py
    Globally disable litellm telemetry in LLM module

    src/praisonai-agents/praisonaiagents/llm/__init__.py

  • Sets the environment variable LITELLM_TELEMETRY to "False" before any
    imports to disable litellm telemetry globally.
  • After importing litellm, explicitly sets litellm.telemetry = False to
    ensure telemetry is disabled.
  • +11/-0   
    memory.py
    Disable litellm telemetry in memory module                             

    src/praisonai-agents/praisonaiagents/memory/memory.py

  • Sets the environment variable LITELLM_TELEMETRY to "False" before any
    imports to disable litellm telemetry.
  • After importing litellm, explicitly sets litellm.telemetry = False.
  • +4/-0     
    Dockerfile.dev
    Update PraisonAI version to 2.2.31 in dev Dockerfile         

    docker/Dockerfile.dev

  • Updated the minimum required version of PraisonAI from 2.2.30 to
    2.2.31 in the development Dockerfile.
  • +1/-1     
    Dockerfile.ui
    Update PraisonAI version to 2.2.31 in UI Dockerfile           

    docker/Dockerfile.ui

  • Increased PraisonAI minimum version from 2.2.30 to 2.2.31 in the UI
    Dockerfile.
  • +1/-1     
    Dockerfile
    Update PraisonAI version to 2.2.31 in main Dockerfile       

    docker/Dockerfile

  • Updated PraisonAI minimum version from 2.2.30 to 2.2.31 in the main
    Dockerfile.
  • +1/-1     
    Enhancement
    4 files
    __init__.py
    Integrate and auto-instrument telemetry in main package init

    src/praisonai-agents/praisonaiagents/__init__.py

  • Adds lazy-loaded telemetry support with functions and classes
    (get_telemetry, enable_telemetry, disable_telemetry, MinimalTelemetry,
    TelemetryCollector).
  • Adds logic to auto-instrument Agent and PraisonAIAgents classes for
    telemetry if available and enabled.
  • Updates __all__ to include telemetry-related exports.
  • +47/-1   
    __init__.py
    Add minimal telemetry module with atexit flush and API     

    src/praisonai-agents/praisonaiagents/telemetry/__init__.py

  • Introduces a new telemetry module with privacy-focused, opt-out
    telemetry.
  • Provides functions for getting, enabling, and disabling telemetry.
  • Registers an atexit handler to flush telemetry data on program exit if
    telemetry is enabled.
  • Prepares for auto-instrumentation and lazy initialization.
  • +102/-0 
    telemetry.py
    Implement minimal anonymous telemetry collector with PostHog

    src/praisonai-agents/praisonaiagents/telemetry/telemetry.py

  • Implements a minimal, privacy-focused telemetry collector with
    anonymous metrics.
  • Adds PostHog integration for anonymous event tracking.
  • Provides methods for tracking agent executions, task completions, tool
    usage, errors, and feature usage.
  • Includes a backward-compatible TelemetryCollector class.
  • Supports opt-out via environment variables and programmatic API.
  • +350/-0 
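A minimal sketch of the anonymous collector this entry describes; the counter and method names follow the walkthrough, but the real class differs in detail (it also integrates PostHog and checks the opt-out environment variables):

```python
import threading
import uuid

class MinimalTelemetry:
    """Anonymous, in-memory metrics collector; a sketch, not the real class."""
    def __init__(self, enabled=True):
        self.enabled = enabled
        self.session_id = str(uuid.uuid4())  # anonymous, regenerated each run
        self._lock = threading.Lock()
        self._metrics = {
            "agent_executions": 0,
            "task_completions": 0,
            "tool_usage": 0,
            "errors": 0,
        }

    def track_agent_execution(self, success=True):
        if not self.enabled:
            return
        with self._lock:
            self._metrics["agent_executions"] += 1
            if not success:
                self._metrics["errors"] += 1

    def track_task_completion(self):
        if self.enabled:
            with self._lock:
                self._metrics["task_completions"] += 1

    def get_metrics(self):
        with self._lock:
            return dict(self._metrics)
```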
    integration.py
    Add integration module for auto-instrumenting telemetry   

    src/praisonai-agents/praisonaiagents/telemetry/integration.py

  • Adds functions to instrument Agent and PraisonAIAgents workflow
    classes for telemetry.
  • Provides auto-instrumentation for all new Agent and PraisonAIAgents
    instances.
  • Ensures double-instrumentation is avoided.
  • +242/-0 
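The instrumentation approach this entry describes (wrapping agent methods and guarding against double instrumentation) can be sketched as below; the guard attribute and telemetry method names are illustrative, not the module's exact code:

```python
import functools

def instrument_agent(agent_cls, telemetry):
    """Wrap agent_cls.chat to report executions; skip if already wrapped."""
    if getattr(agent_cls, "_telemetry_instrumented", False):
        return agent_cls  # avoid double-instrumentation

    original_chat = agent_cls.chat

    @functools.wraps(original_chat)
    def chat(self, *args, **kwargs):
        try:
            result = original_chat(self, *args, **kwargs)
            telemetry.track_agent_execution(success=True)
            return result
        except Exception:
            telemetry.track_agent_execution(success=False)
            raise

    agent_cls.chat = chat
    agent_cls._telemetry_instrumented = True
    return agent_cls
```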
    Documentation
    4 files
    README.md
    Add README documentation for telemetry module                       

    src/praisonai-agents/praisonaiagents/telemetry/README.md

  • Adds documentation for the telemetry module, including privacy
    guarantees, usage, disabling options, and implementation details.
  • +108/-0 
    telemetry_analysis.md
    Add technical analysis of telemetry integration issues     

    src/praisonai-agents/telemetry_analysis.md

  • Adds a technical analysis document explaining why telemetry events
    were not being sent and recommendations for fixing integration and
    flush issues.
  • +123/-0 
    TELEMETRY_SUMMARY.md
    Add telemetry implementation summary and usage documentation

    src/praisonai-agents/TELEMETRY_SUMMARY.md

  • Added a new markdown file summarizing recent telemetry implementation
    fixes and features.
  • Describes fixed issues, current status, example metrics, and
    instructions to disable telemetry.
  • +33/-0   
    README.md
    Bump PraisonAI version to 2.2.31 in Docker documentation 

    docker/README.md

  • Updated PraisonAI version references from 2.2.30 to 2.2.31 in
    documentation and Docker instructions.
  • +2/-2     
    Dependencies
    5 files
    pyproject.toml
    Update dependencies and add telemetry extra in pyproject.toml

    src/praisonai-agents/pyproject.toml

  • Bumps package version to 0.0.104.
  • Adds posthog as a dependency for telemetry.
  • Updates litellm minimum version to 1.72.0 for memory and llm extras.
  • Adds a new optional dependency group
    [project.optional-dependencies.telemetry].
  • Includes telemetry in the all extra.
  • +12/-5   
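An illustrative fragment of what this `pyproject.toml` change looks like. Note the posthog pin differs between the PR's own summaries (`>=3.0.0` in one, `>=4.0.0` in another), so treat the version as approximate, and the `all` extra contains more entries than shown:

```toml
# Illustrative fragment only; version pins are approximate.
[project.optional-dependencies]
telemetry = ["posthog>=3.0.0"]
all = ["praisonaiagents[telemetry]"]
```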
    pyproject.toml
    Bump PraisonAI and agent dependency versions                         

    src/praisonai/pyproject.toml

  • Bumps PraisonAI version to 2.2.31.
  • Updates dependency on praisonaiagents to >=0.0.104.
  • +4/-4     
    deploy.py
    Update Dockerfile to use latest PraisonAI version               

    src/praisonai/praisonai/deploy.py

    • Updates Dockerfile creation to use praisonai version 2.2.31.
    +1/-1     
    praisonai.rb
    Update Homebrew formula for new PraisonAI release               

    src/praisonai/praisonai.rb

  • Updates Homebrew formula to use PraisonAI version 2.2.31 and
    corresponding SHA256.
  • +2/-2     
    Dockerfile.chat
    Update Dockerfile to use latest PraisonAI version               

    docker/Dockerfile.chat

    • Updates Dockerfile to require praisonai version 2.2.31.
    +1/-1     
    Tests
    17 files
    debug_telemetry.py
    Add debug script for telemetry instrumentation inspection

    src/praisonai-agents/debug_telemetry.py

  • Adds a debug script to inspect telemetry instrumentation and metrics
    in agents and workflows.
  • +69/-0   
    debug_telemetry_double.py
    Add debug script to detect telemetry double-counting         

    src/praisonai-agents/debug_telemetry_double.py

  • Adds a debug script to check for double-counting in telemetry metrics.
  • +69/-0   
    debug_auto_instrument.py
    Add debug script for auto-instrumentation of telemetry     

    src/praisonai-agents/debug_auto_instrument.py

  • Adds a debug script to test auto-instrumentation of Agent and
    PraisonAIAgents classes for telemetry.
  • +44/-0   
    test_manual_instrumentation.py
    Add test for manual telemetry instrumentation                       

    src/praisonai-agents/test_manual_instrumentation.py

  • Adds a test script to verify manual telemetry instrumentation and
    metric collection.
  • +54/-0   
    test_telemetry_automatic.py
    Add test for automatic telemetry instrumentation                 

    src/praisonai-agents/test_telemetry_automatic.py

  • Adds a test to verify that telemetry works automatically without
    manual setup.
  • +47/-0   
    test_telemetry_simple.py
    Add simple test for telemetry metrics and PostHog               

    src/praisonai-agents/test_telemetry_simple.py

  • Adds a simple test to check telemetry metric collection and PostHog
    client availability.
  • +29/-0   
    test_posthog.py
    Add test for PostHog integration and event tracking           

    src/praisonai-agents/test_posthog.py

  • Adds a test to verify PostHog integration, event tracking, and
    flushing.
  • +64/-0   
    test_posthog_direct.py
    Add test for direct PostHog integration and flush               

    src/praisonai-agents/test_posthog_direct.py

  • Adds a test to verify direct PostHog integration and metric flushing.
  • +43/-0   
    test_posthog_detailed.py
    Add detailed debug test for PostHog integration                   

    src/praisonai-agents/test_posthog_detailed.py

  • Adds a detailed test script for PostHog integration, including debug
    output and event tracking.
  • +139/-0 
    test_posthog_error.py
    Add test for PostHog initialization error handling             

    src/praisonai-agents/test_posthog_error.py

  • Adds a test to check PostHog initialization error handling with
    telemetry parameters.
  • +20/-0   
    test_posthog_import.py
    Add test for PostHog import and telemetry integration       

    src/praisonai-agents/test_posthog_import.py

  • Adds a test to verify PostHog import, initialization, and telemetry
    module integration.
  • +50/-0   
    test_telemetry_integration.py
    Add test to analyze missing telemetry integration               

    src/praisonai-agents/test_telemetry_integration.py

  • Adds a test to demonstrate why telemetry wasn't being sent by default
    and highlights missing integration points.
  • +126/-0 
    test_auto_telemetry_final.py
    Add final test for automatic telemetry instrumentation     

    src/praisonai-agents/test_auto_telemetry_final.py

  • Adds a final test to verify automatic telemetry instrumentation and
    metric collection.
  • +40/-0   
    telemetry_minimal.py
    Add minimal example for telemetry integration                       

    src/praisonai-agents/telemetry_minimal.py

  • Adds a minimal example script showing telemetry integration and metric
    collection.
  • +53/-0   
    test_litellm_telemetry.py
    Add test for disabling litellm telemetry                                 

    src/praisonai-agents/test_litellm_telemetry.py

  • Adds a test script to verify litellm telemetry disabling and mock
    completion.
  • +45/-0   
    test_telemetry_disabled.py
    Add test to verify litellm telemetry is disabled                 

    src/praisonai-agents/test_telemetry_disabled.py

  • Adds a test to verify that litellm telemetry is properly disabled in
    the environment and after LLM instantiation.
  • +52/-0   
    test_import_order.py
    Add test to debug telemetry import order                                 

    src/praisonai-agents/test_import_order.py

  • Adds a test to debug import order and telemetry module availability.
  • +38/-0   
    Additional files
    6 files
    telemetry_example.py +86/-0   
    test.py +2/-2     
    state_based_workflow_example.py +294/-0 
    state_management_example.py +259/-0 
    state_with_memory_example.py +397/-0 
    telemetry_example.py +258/-0 

  • Summary by CodeRabbit

    • New Features

      • Introduced minimal, privacy-focused telemetry for PraisonAI Agents, enabling anonymous tracking of agent executions, task completions, and tool usage, with opt-out controls.
      • Added automatic and manual telemetry instrumentation, including integration with PostHog for event tracking.
      • Provided comprehensive documentation and example scripts demonstrating telemetry usage, integration, and debugging.
    • Bug Fixes

      • Ensured telemetry is properly disabled in third-party integrations where required.
    • Documentation

      • Added detailed guides and examples for telemetry integration, privacy guarantees, and state management features.
    • Chores

      • Updated dependencies and package versions to improve compatibility and stability.

    Commit messages

    1. (subject not shown)
       - Added 'posthog>=4.0.0' to dependencies and an optional 'telemetry' section in pyproject.toml.
       - Updated blog_agent to use llm_config for improved clarity.
       - Disabled memory in PraisonAIAgents instantiation for better control.
       - Introduced telemetry support with lazy loading and fallback functions in __init__.py.
       This update improves the integration of telemetry features while maintaining existing functionality.

    2. …mance and consistency
       - Added environment variable to disable litellm telemetry in __init__.py, llm.py, and memory.py.
       - Ensured telemetry is disabled after importing litellm to prevent unnecessary data collection.
       This change enhances control over telemetry features while maintaining existing functionality.

    3. …ts to version 0.0.104
       - Updated PraisonAI version in Dockerfiles and Ruby formula.
       - Adjusted dependency versions in pyproject.toml and uv.lock for consistency.
       - Enhanced README to reflect the new versioning.
       This change ensures compatibility with the latest features and improvements while maintaining existing functionality.
    @gitguardian
    gitguardian bot commented Jun 7, 2025
    ⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.

    Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

    🔎 Detected hardcoded secret in your pull request

    | GitGuardian id | Status | Secret | Commit | Filename |
    | --- | --- | --- | --- | --- |
    | 17682666 | Triggered | Generic High Entropy Secret | 2c4af80 | src/praisonai-agents/test_posthog_detailed.py |
    🛠 Guidelines to remediate hardcoded secrets
    1. Understand the implications of revoking this secret by investigating where it is used in your code.
    2. Replace and store your secret safely. Learn here the best practices.
    3. Revoke and rotate this secret.
    4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.


    @MervinPraison MervinPraison merged commit 3a5b84b into main Jun 7, 2025
    4 of 12 checks passed
    @coderabbitai
    coderabbitai bot (Contributor) commented Jun 7, 2025

    Caution

    Review failed

    The pull request is closed.

    Walkthrough

    This change updates the PraisonAI and PraisonAIAgents package versions across multiple Dockerfiles, deployment scripts, documentation, and project configuration files. The minimum required version is incremented from 2.2.30 to 2.2.31 for PraisonAI and from 0.0.102 to 0.0.104 for PraisonAIAgents, with corresponding checksum and dependency updates.

    Changes

    | File(s) | Change Summary |
    | --- | --- |
    | docker/Dockerfile, docker/Dockerfile.chat, docker/Dockerfile.dev, docker/Dockerfile.ui | Updated praisonai pip install version from 2.2.30 to 2.2.31. |
    | docker/README.md | Updated documentation to reflect PraisonAI version bump from 2.2.30 to 2.2.31. |
    | src/praisonai/praisonai.rb | Updated source URL and SHA256 checksum for PraisonAI version 2.2.31. |
    | src/praisonai/praisonai/deploy.py | Updated Dockerfile creation logic to install PraisonAI version 2.2.31. |
    | src/praisonai/pyproject.toml | Bumped PraisonAI version to 2.2.31 and PraisonAIAgents dependency to >=0.0.104. |

    Sequence Diagram(s)

    No sequence diagram generated as the changes are version and configuration updates only.

    Possibly related PRs

    • MervinPraison/PraisonAI#595: Updates PraisonAI package version from 2.2.25 to 2.2.26 in Dockerfiles and related files, similar version bump in the same files.
    • MervinPraison/PraisonAI#604: Updates PraisonAI version from 2.2.27 to 2.2.28 in Dockerfiles and related files, analogous version bump.
    • MervinPraison/PraisonAI#611: Updates PraisonAI version from 2.2.29 to 2.2.30 in Dockerfiles and related files, sequential version bump in the same components.

    Suggested labels

    Review effort 1/5

    Poem

    A hop, a skip, a version jump—
    From thirty to thirty-one we bump!
    Dockerfiles and docs align,
    Checksums match, the stars all shine.
    With every hop, our code grows bright—
    Cheers to updates, done just right!
    🐇✨


    📜 Recent review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between 250ad7d and 79ab958.

    ⛔ Files ignored due to path filters (2)
    • src/praisonai-agents/uv.lock is excluded by !**/*.lock
    • src/praisonai/uv.lock is excluded by !**/*.lock
    📒 Files selected for processing (42)
    • docker/Dockerfile (1 hunks)
    • docker/Dockerfile.chat (1 hunks)
    • docker/Dockerfile.dev (1 hunks)
    • docker/Dockerfile.ui (1 hunks)
    • docker/README.md (2 hunks)
    • src/praisonai-agents/TELEMETRY_SUMMARY.md (1 hunks)
    • src/praisonai-agents/debug_auto_instrument.py (1 hunks)
    • src/praisonai-agents/debug_telemetry.py (1 hunks)
    • src/praisonai-agents/debug_telemetry_double.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/__init__.py (2 hunks)
    • src/praisonai-agents/praisonaiagents/llm/__init__.py (2 hunks)
    • src/praisonai-agents/praisonaiagents/llm/llm.py (2 hunks)
    • src/praisonai-agents/praisonaiagents/memory/memory.py (2 hunks)
    • src/praisonai-agents/praisonaiagents/telemetry/README.md (1 hunks)
    • src/praisonai-agents/praisonaiagents/telemetry/__init__.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/telemetry/integration.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/telemetry/telemetry.py (1 hunks)
    • src/praisonai-agents/pyproject.toml (5 hunks)
    • src/praisonai-agents/telemetry_analysis.md (1 hunks)
    • src/praisonai-agents/telemetry_example.py (1 hunks)
    • src/praisonai-agents/telemetry_minimal.py (1 hunks)
    • src/praisonai-agents/test.py (2 hunks)
    • src/praisonai-agents/test_auto_telemetry_final.py (1 hunks)
    • src/praisonai-agents/test_import_order.py (1 hunks)
    • src/praisonai-agents/test_litellm_telemetry.py (1 hunks)
    • src/praisonai-agents/test_manual_instrumentation.py (1 hunks)
    • src/praisonai-agents/test_posthog.py (1 hunks)
    • src/praisonai-agents/test_posthog_detailed.py (1 hunks)
    • src/praisonai-agents/test_posthog_direct.py (1 hunks)
    • src/praisonai-agents/test_posthog_error.py (1 hunks)
    • src/praisonai-agents/test_posthog_import.py (1 hunks)
    • src/praisonai-agents/test_telemetry_automatic.py (1 hunks)
    • src/praisonai-agents/test_telemetry_disabled.py (1 hunks)
    • src/praisonai-agents/test_telemetry_integration.py (1 hunks)
    • src/praisonai-agents/test_telemetry_simple.py (1 hunks)
    • src/praisonai-agents/tests/state_based_workflow_example.py (1 hunks)
    • src/praisonai-agents/tests/state_management_example.py (1 hunks)
    • src/praisonai-agents/tests/state_with_memory_example.py (1 hunks)
    • src/praisonai-agents/tests/telemetry_example.py (1 hunks)
    • src/praisonai/praisonai.rb (1 hunks)
    • src/praisonai/praisonai/deploy.py (1 hunks)
    • src/praisonai/pyproject.toml (4 hunks)


    @gemini-code-assist bot (Contributor) left a comment

    Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

    Summary of Changes

    As gemini-code-assist, I've reviewed this pull request titled "Develop". Based on the changes, which primarily focus on the praisonai-agents and PraisonAI components, the main intent appears to be the implementation and integration of a new minimal telemetry system within praisonai-agents, along with significant dependency updates and version bumps for both libraries. The PR also includes efforts to explicitly disable litellm's built-in telemetry to avoid conflicts or unwanted data collection. Unfortunately, the pull request description was empty, so this summary is based solely on the code changes.

    Highlights

    • Minimal Telemetry System: A new, privacy-focused telemetry module has been added to praisonai-agents. It tracks anonymous usage metrics like agent executions, task completions, tool usage, and errors, with an opt-out mechanism via environment variables (PRAISONAI_TELEMETRY_DISABLED, PRAISONAI_DISABLE_TELEMETRY, DO_NOT_TRACK).
    • Automatic Instrumentation: Telemetry is now automatically integrated with Agent and PraisonAIAgents classes upon import (if enabled), wrapping key methods (chat, execute_task, start, astart, execute_tool) to track events without requiring manual calls in user code.
    • Dependency Updates: Key dependencies have been updated, including praisonaiagents (to 0.0.104), litellm (to >=1.72.0), openai (to 1.84.0), and posthog (added as a dependency and updated to 4.3.2). Several other dependencies were added or removed.
    • Disable LiteLLM Telemetry: Code has been added in multiple places (llm/__init__.py, llm/llm.py, memory/memory.py) to explicitly disable litellm's built-in telemetry via environment variables and programmatic settings.
    • Version Bumps: The praisonaiagents package version has been bumped to 0.0.104, and the main PraisonAI package version has been bumped to 2.2.31. Dockerfiles and the Homebrew formula have been updated accordingly.
    • New Examples and Tests: Several new files have been added, including debug scripts for telemetry, examples demonstrating telemetry usage and state management (state_based_workflow_example.py, state_management_example.py, state_with_memory_example.py), and various specific telemetry test cases.

    Changelog

    • docker/Dockerfile
      • Updated praisonai version requirement to >=2.2.31.
    • docker/Dockerfile.chat
      • Updated praisonai version requirement to >=2.2.31.
    • docker/Dockerfile.dev
      • Updated praisonai version requirement to >=2.2.31.
    • docker/Dockerfile.ui
      • Updated praisonai version requirement to >=2.2.31.
    • docker/README.md
      • Updated PraisonAI version mention in Package Versions section to >=2.2.31.
      • Updated PraisonAI version mention in Version Pinning example to ==2.2.31.
    • src/praisonai-agents/TELEMETRY_SUMMARY.md
      • Added new file summarizing telemetry implementation fixes and current status.
    • src/praisonai-agents/debug_auto_instrument.py
      • Added new debug script for testing telemetry auto-instrumentation import order.
    • src/praisonai-agents/debug_telemetry.py
      • Added new debug script for general telemetry instrumentation debugging.
    • src/praisonai-agents/debug_telemetry_double.py
      • Added new debug script to check for double-counting in telemetry metrics.
    • src/praisonai-agents/praisonaiagents/__init__.py
      • Added imports for telemetry functions and classes (get_telemetry, enable_telemetry, disable_telemetry, MinimalTelemetry, TelemetryCollector).
      • Added conditional call to auto_instrument_all() after imports if telemetry is available and enabled.
      • Added telemetry functions and classes to __all__ export list.
    • src/praisonai-agents/praisonaiagents/llm/__init__.py
      • Added import for os.
      • Added code to set LITELLM_TELEMETRY environment variable to "False" before litellm import.
      • Added try...except block to explicitly set litellm.telemetry = False after litellm import.
    • src/praisonai-agents/praisonaiagents/llm/llm.py
      • Added import for os.
      • Added code to set LITELLM_TELEMETRY environment variable to "False" before imports.
      • Added code to explicitly set litellm.telemetry = False within the LLM.__init__ method.
    • src/praisonai-agents/praisonaiagents/memory/memory.py
      • Added import for os.
      • Added code to set LITELLM_TELEMETRY environment variable to "False" before imports.
      • Added code to explicitly set litellm.telemetry = False within the try block where litellm is imported.
    • src/praisonai-agents/praisonaiagents/telemetry/README.md
      • Added new README file detailing the telemetry module, privacy guarantees, disabling methods, usage, and implementation.
    • src/praisonai-agents/praisonaiagents/telemetry/__init__.py
      • Added module docstring.
      • Imported MinimalTelemetry and TelemetryCollector.
      • Defined public functions get_telemetry, enable_telemetry, disable_telemetry.
      • Added atexit registration to call get_telemetry().flush() on program exit if telemetry is enabled.
    • src/praisonai-agents/praisonaiagents/telemetry/integration.py
      • Added new module containing functions for instrumenting Agent and PraisonAIAgents classes.
      • Implemented instrument_agent to wrap agent methods (chat, start, run, execute_tool) for tracking.
      • Implemented instrument_workflow to wrap workflow methods (execute_task, start, astart) for tracking.
      • Implemented auto_instrument_all to wrap Agent.__init__ and PraisonAIAgents.__init__ for automatic instrumentation.
    • src/praisonai-agents/praisonaiagents/telemetry/telemetry.py
      • Added the core MinimalTelemetry class for collecting anonymous metrics.
      • Implemented tracking methods (track_agent_execution, track_task_completion, track_tool_usage, track_error, track_feature_usage).
      • Added PostHog integration (conditional on posthog availability) with a specific API key and host.
      • Implemented flush method to send collected metrics to PostHog.
      • Added logic to check environment variables for disabling telemetry.
      • Implemented get_metrics to retrieve current collected data.
      • Added TelemetryCollector class for backward compatibility.
      • Added global _telemetry_instance and helper functions get_telemetry, disable_telemetry, enable_telemetry.
    • src/praisonai-agents/pyproject.toml
      • Updated praisonaiagents version to 0.0.104.
      • Added posthog>=3.0.0 as a core dependency.
      • Updated litellm version to >=1.72.0 in memory and llm optional dependencies.
      • Added posthog>=3.0.0 to the telemetry optional dependency group.
      • Added praisonaiagents[telemetry] to the all optional dependency group.
      • Removed dependencies: dataclasses-json, langchain-community, monotonic, mypy-extensions, marshmallow, typing-inspect.
      • Added dependencies: json-repair, langchain-neo4j, neo4j-graphrag, pypdf, types-pyyaml.
      • Updated openai version to 1.84.0.
      • Updated posthog version to 4.3.2 in uv.lock.
      • Updated praisonaiagents version to 0.0.104 in uv.lock.
      • Updated various other dependency versions in uv.lock.
    • src/praisonai-agents/test.py
      • Changed llm parameter in Agent initialization from a string to llm_config.
      • Changed memory parameter in PraisonAIAgents initialization from True to False.
    • src/praisonai-agents/test_auto_telemetry_final.py
      • Added new test script for final automatic telemetry check.
    • src/praisonai-agents/test_import_order.py
      • Added new test script to debug telemetry import order issues.
    • src/praisonai-agents/test_litellm_telemetry.py
      • Added new test script to verify litellm telemetry is disabled.
    • src/praisonai-agents/test_manual_instrumentation.py
      • Added new test script to verify manual telemetry instrumentation.
    • src/praisonai-agents/test_posthog.py
      • Added new test script to verify PostHog integration.
    • src/praisonai-agents/test_posthog_detailed.py
      • Added new detailed test script for PostHog integration with debug output.
    • src/praisonai-agents/test_posthog_direct.py
      • Added new test script to test PostHog integration directly.
    • src/praisonai-agents/test_posthog_error.py
      • Added new test script to test PostHog initialization error (likely related to events_to_ignore).
    • src/praisonai-agents/test_posthog_import.py
      • Added new test script to test PostHog import and initialization.
    • src/praisonai-agents/test_telemetry_automatic.py
      • Added new test script to verify automatic telemetry.
    • src/praisonai-agents/test_telemetry_disabled.py
      • Added new test script to verify litellm telemetry is disabled.
    • src/praisonai-agents/test_telemetry_integration.py
      • Added new test script demonstrating telemetry integration issues and solutions.
    • src/praisonai-agents/test_telemetry_simple.py
      • Added new simple test script for telemetry.
    • src/praisonai-agents/tests/state_based_workflow_example.py
      • Added new example demonstrating state-based workflow control.
    • src/praisonai-agents/tests/state_management_example.py
      • Added new example demonstrating state management features.
    • src/praisonai-agents/tests/state_with_memory_example.py
      • Added new example demonstrating state with memory integration.
    • src/praisonai-agents/tests/telemetry_example.py
      • Added new example demonstrating various telemetry usages (basic, manual, advanced, errors).
    • src/praisonai/praisonai.rb
      • Updated Homebrew formula URL and SHA256 for version 2.2.31.
    • src/praisonai/praisonai/deploy.py
      • Updated hardcoded praisonai version in Dockerfile template to 2.2.31.
    • src/praisonai/pyproject.toml
      • Updated PraisonAI version to 2.2.31.
      • Updated praisonaiagents dependency to >=0.0.104.
      • Updated PraisonAI version in [tool.poetry] section.
      • Updated praisonaiagents dependency in [tool.poetry.dependencies].
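The opt-out behaviour described in the walkthrough above (environment-variable checks plus the `disable_telemetry`/`enable_telemetry` helpers) boils down to a check like the following sketch. The variable names are illustrative assumptions, not confirmed from the diff; `DO_NOT_TRACK` is the informal cross-tool convention.

```python
import os

def telemetry_enabled(env=None) -> bool:
    """Return False when a common opt-out variable is set.

    PRAISONAI_TELEMETRY_DISABLED is an illustrative stand-in for
    whatever variable the library actually checks.
    """
    env = os.environ if env is None else env
    for var in ("PRAISONAI_TELEMETRY_DISABLED", "DO_NOT_TRACK"):
        if str(env.get(var, "")).lower() in ("1", "true", "yes"):
            return False
    return True
```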


    @qodo-code-review

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🧪 PR contains tests
    🔒 Security concerns

    Sensitive information exposure:
    The code contains a hardcoded PostHog API key at src/praisonai-agents/praisonaiagents/telemetry/telemetry.py:87 (project_api_key='phc_skZpl3eFLQJ4iYjsERNMbCO6jfeSJi2vyZlPahKgxZ7'). While this might be intended for public use, hardcoded API keys in source code are generally considered a security risk. Consider using environment variables or a configuration file to store this key.
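One way to act on this suggestion is to read the key from the environment and keep the hardcoded value only as a fallback. A minimal sketch; `PRAISONAI_POSTHOG_KEY` is a hypothetical variable name, and the fallback is the public project key quoted in the comment above.

```python
import os

# Fallback: the public project key quoted in the review comment above.
DEFAULT_POSTHOG_KEY = "phc_skZpl3eFLQJ4iYjsERNMbCO6jfeSJi2vyZlPahKgxZ7"

def posthog_api_key(env=None) -> str:
    """Prefer an environment override; variable name is illustrative."""
    env = os.environ if env is None else env
    return env.get("PRAISONAI_POSTHOG_KEY", DEFAULT_POSTHOG_KEY)
```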

    ⚡ Recommended focus areas for review

    Privacy Considerations

    The telemetry implementation sends data to PostHog with an API key hardcoded in the source code. While the code attempts to be privacy-focused, verify that the data sent is truly anonymous and that no sensitive information is collected.

        self._posthog = Posthog(
            project_api_key='phc_skZpl3eFLQJ4iYjsERNMbCO6jfeSJi2vyZlPahKgxZ7',
            host='https://eu.i.posthog.com',
            disable_geoip=True
        )
    except:
        self._posthog = None
    Potential Double Counting

    The instrumentation wraps multiple methods (chat, start, run) that might be called in sequence, potentially causing duplicate telemetry events for a single operation. This could lead to inflated metrics.

    if original_chat:
        @wraps(original_chat)
        def instrumented_chat(*args, **kwargs):
            try:
                result = original_chat(*args, **kwargs)
                telemetry.track_agent_execution(agent.name, success=True)
                return result
            except Exception as e:
                telemetry.track_agent_execution(agent.name, success=False)
                telemetry.track_error(type(e).__name__)
                raise
    
        agent.chat = instrumented_chat
    
    # Wrap start method if it exists
    if original_start:
        @wraps(original_start)
        def instrumented_start(*args, **kwargs):
            try:
                result = original_start(*args, **kwargs)
                telemetry.track_agent_execution(agent.name, success=True)
                return result
            except Exception as e:
                telemetry.track_agent_execution(agent.name, success=False)
                telemetry.track_error(type(e).__name__)
                raise
    
        agent.start = instrumented_start
    
    # Wrap run method if it exists
    if original_run:
        @wraps(original_run)
        def instrumented_run(*args, **kwargs):
            try:
                result = original_run(*args, **kwargs)
                telemetry.track_agent_execution(agent.name, success=True)
                return result
            except Exception as e:
                telemetry.track_agent_execution(agent.name, success=False)
                telemetry.track_error(type(e).__name__)
                raise
    
        agent.run = instrumented_run
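One way to avoid the duplicate events flagged here is a re-entrancy guard: when an instrumented method (e.g. `start()`) internally calls another instrumented method (e.g. `chat()`), only the outermost call emits an event. A sketch of the idea, not the PR's actual implementation:

```python
import threading

_in_tracked_call = threading.local()

def track_once(track, fn):
    """Wrap fn so nested instrumented calls emit only one event.

    `track` stands in for a telemetry callback like
    track_agent_execution; only the outermost wrapped call invokes it.
    """
    def wrapper(*args, **kwargs):
        if getattr(_in_tracked_call, "active", False):
            return fn(*args, **kwargs)  # inner call: no extra event
        _in_tracked_call.active = True
        try:
            result = fn(*args, **kwargs)
            track(success=True)
            return result
        except Exception:
            track(success=False)
            raise
        finally:
            _in_tracked_call.active = False
    return wrapper
```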
    Exception Handling

    The telemetry auto-instrumentation silently fails if there are any issues. While this prevents crashes, it might hide important errors and make debugging difficult if telemetry isn't working as expected.

    try:
        # Only instrument if telemetry is enabled
        _telemetry = get_telemetry()
        if _telemetry and _telemetry.enabled:
            from .telemetry.integration import auto_instrument_all
            auto_instrument_all(_telemetry)
    except Exception:
        # Silently fail if there are any issues
        pass


    @gemini-code-assist gemini-code-assist bot left a comment

    Code Review

    This pull request introduces a significant new telemetry system into the praisonai-agents package, updates various dependencies including praisonai and praisonaiagents versions, and adds numerous test and example scripts related to the new telemetry features. The telemetry implementation appears well-thought-out, covering aspects like auto-instrumentation, opt-out mechanisms, privacy considerations (e.g., anonymous IDs, disabling GeoIP for PostHog), and backward compatibility for the old TelemetryCollector.

    Key changes include:

    • Addition of a telemetry submodule in praisonaiagents with MinimalTelemetry and integration logic.
    • Automatic instrumentation of Agent and PraisonAIAgents classes.
    • Use of atexit to ensure telemetry data is flushed on program exit.
    • Integration with PostHog for telemetry data collection.
    • Disabling of litellm's own telemetry.
    • Version bumps for praisonai to v2.2.31 and praisonaiagents to v0.0.104.

    The changes are generally well-implemented. However, there are a few areas, particularly around exception handling in the telemetry module, that could be refined for better robustness and debuggability.

    Please consider updating the pull request title and description to be more informative about the changes included. A title like "feat: Implement Telemetry System for PraisonAI Agents and Update Dependencies" would be more descriptive than "Develop".

    Summary of Findings

    • Broad Exception Handling in Telemetry Initialization: In src/praisonai-agents/praisonaiagents/__init__.py (auto-instrumentation) and src/praisonai-agents/praisonaiagents/telemetry/telemetry.py (PostHog client setup and flush), general except Exception: or bare except: clauses are used. While intended for silent failure of the optional telemetry feature, this can hide underlying issues. It's recommended to catch more specific exceptions or at least log the caught exception at a debug level to aid in troubleshooting.
    • Dependency Updates: The PR includes updates to several key dependencies, including litellm (v1.50.0 to v1.72.0) and posthog (v3.7.5 to v4.3.2). These are significant version jumps and should be thoroughly tested to ensure no regressions or unexpected behavior are introduced.
    • New Telemetry System: A comprehensive telemetry system has been added to praisonaiagents. This is a major new feature that includes automatic instrumentation, opt-out mechanisms, and PostHog integration. The implementation is largely robust, with the aforementioned points on exception handling being areas for potential improvement.
    • Missing Newlines at End of Files: Several new script files (.py) and Markdown files (.md) are missing a newline character at the end of the file. While this is often a minor stylistic issue, many linters and POSIX standards prefer files to end with a newline. This was found in: TELEMETRY_SUMMARY.md, debug_auto_instrument.py, debug_telemetry.py, debug_telemetry_double.py, telemetry/README.md, telemetry/__init__.py, telemetry/integration.py, telemetry/telemetry.py, telemetry_analysis.md, telemetry_example.py, telemetry_minimal.py, test_auto_telemetry_final.py, test_import_order.py, test_litellm_telemetry.py, test_manual_instrumentation.py, test_posthog.py, test_posthog_detailed.py, test_posthog_direct.py, test_posthog_error.py, test_posthog_import.py, test_telemetry_automatic.py, test_telemetry_disabled.py, test_telemetry_integration.py, test_telemetry_simple.py, tests/state_based_workflow_example.py, tests/state_management_example.py, tests/state_with_memory_example.py, tests/telemetry_example.py. (Note: Not commented inline due to review settings for low severity issues).
    • Unused Import: In src/praisonai-agents/praisonaiagents/telemetry/integration.py, import time is present but time is not used within the file. (Note: Not commented inline due to review settings for low severity issues).

    Merge Readiness

    This pull request introduces valuable telemetry features and necessary version updates. However, due to the medium-severity concerns regarding broad exception handling in the new telemetry module, I recommend addressing these points to improve the robustness and maintainability of the code. Once these suggestions are considered, the PR should be in a much better state for merging. As an AI, I am not authorized to approve pull requests; please ensure further review and approval from team members before merging.

    Comment on lines +73 to +75
    except Exception:
        # Silently fail if there are any issues
        pass

    medium

    Is it possible to catch more specific exceptions here instead of a general Exception? While the comment indicates that silent failure is intentional for auto-instrumentation, catching specific exceptions (e.g., ImportError, AttributeError, or custom exceptions from the telemetry module) could provide more targeted resilience. If a general catch is necessary, consider logging the exception at a DEBUG level. This could help diagnose issues during development or if telemetry setup becomes problematic in certain environments, without crashing the main application.
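The narrower handling suggested here could look like the following sketch: catch only the failure modes instrumentation can realistically hit, and log them at DEBUG rather than swallowing everything. This is illustrative, not the PR's code; `get_telemetry` and `auto_instrument_all` are passed in as callables.

```python
import logging

logger = logging.getLogger("praisonaiagents.telemetry")

def safe_auto_instrument(get_telemetry, auto_instrument_all):
    """Instrument if telemetry is enabled; log (not hide) setup failures."""
    try:
        telemetry = get_telemetry()
        if telemetry and telemetry.enabled:
            auto_instrument_all(telemetry)
    except (ImportError, AttributeError) as e:
        logger.debug("Telemetry auto-instrumentation skipped: %s", e)
```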

    Comment on lines +91 to +92
    except:
    self._posthog = None

    medium

    Could this except: block be more specific, for example, except Exception as e:? Catching a bare except: can mask unexpected errors, including system-exiting exceptions like SystemExit or KeyboardInterrupt. If the goal is to catch any error during PostHog initialization and proceed without it, explicitly catching Exception as e and perhaps logging e at a debug level would be safer and more informative for debugging potential issues with PostHog setup.

    Comment on lines +223 to +224
    except:
    pass

    medium

    Similar to the PostHog initialization, could this except: block be more specific, like except Exception as e:? This would prevent catching system-exiting exceptions and allow for logging the specific error if PostHog data submission fails. Logging the error (e.g., at a debug level) could be valuable for diagnosing intermittent network issues or problems with the PostHog service, even if the application itself continues to run.
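The pattern suggested here can be sketched as follows. `client` stands in for a PostHog-like object with a `capture()` method; the point is that a flush failure surfaces at DEBUG level instead of vanishing in a bare `except:`.

```python
import logging

logger = logging.getLogger("praisonaiagents.telemetry")

def flush_events(client, events):
    """Send queued metrics; log delivery failures instead of hiding them."""
    try:
        for event in events:
            client.capture(**event)
    except Exception as e:  # narrower than bare except: spares SystemExit etc.
        logger.debug("Telemetry flush failed: %s", e)
```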

    @qodo-code-review

    PR Code Suggestions ✨

    No code suggestions found for the PR.


    @cursor cursor bot left a comment


    Bug: Double Instrumentation Causes Recursive Calls

    The auto_instrument_all function monkey-patches Agent.__init__ and PraisonAIAgents.__init__ without checking if they have already been instrumented. Calling this function multiple times results in nested wrapping of the __init__ methods, which can lead to recursive calls, potential stack overflow, and duplicate telemetry tracking for the same events. A check should be added to prevent re-wrapping, similar to the checks in instrument_agent and instrument_workflow.

    src/praisonai-agents/praisonaiagents/telemetry/integration.py#L199-L238

    # Auto-instrumentation helper
    def auto_instrument_all(telemetry: Optional['MinimalTelemetry'] = None):
        """
        Automatically instrument all new instances of Agent and PraisonAIAgents.
        This should be called after enabling telemetry.

        Args:
            telemetry: Optional telemetry instance (uses global if not provided)
        """
        if not telemetry:
            from .telemetry import get_telemetry
            telemetry = get_telemetry()

        if not telemetry.enabled:
            return

        try:
            # Import the classes
            from ..agent.agent import Agent
            from ..agents.agents import PraisonAIAgents

            # Store original __init__ methods
            original_agent_init = Agent.__init__
            original_workflow_init = PraisonAIAgents.__init__

            # Wrap Agent.__init__
            @wraps(original_agent_init)
            def agent_init_wrapper(self, *args, **kwargs):
                original_agent_init(self, *args, **kwargs)
                instrument_agent(self, telemetry)

            # Wrap PraisonAIAgents.__init__
            @wraps(original_workflow_init)
            def workflow_init_wrapper(self, *args, **kwargs):
                original_workflow_init(self, *args, **kwargs)
                instrument_workflow(self, telemetry)

            # Apply wrapped constructors
            Agent.__init__ = agent_init_wrapper
            PraisonAIAgents.__init__ = workflow_init_wrapper

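The guard the report asks for can be as simple as marking the class once it has been patched, so repeated `auto_instrument_all()` calls become no-ops. A sketch under that assumption; names here are illustrative, not the PR's actual code:

```python
from functools import wraps

def instrument_once(cls, on_init):
    """Patch cls.__init__ to call on_init(instance), at most once per class."""
    if getattr(cls, "_telemetry_instrumented", False):
        return  # already patched; avoid nested wrappers and recursion
    original_init = cls.__init__

    @wraps(original_init)
    def init_wrapper(self, *args, **kwargs):
        original_init(self, *args, **kwargs)
        on_init(self)

    cls.__init__ = init_wrapper
    cls._telemetry_instrumented = True
```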

    @coderabbitai coderabbitai bot mentioned this pull request Jun 14, 2025
    shaneholloman pushed a commit to shaneholloman/praisonai that referenced this pull request Feb 4, 2026
